Machine learning and deep learning methods have become essential for computer-aided prediction in medicine, with a growing number of applications in the field of mammography. Typically, these algorithms are trained for a specific task, e.g., the classification of lesions or the prediction of a mammogram's pathology status. To obtain a comprehensive view of a patient, models trained for the same task are subsequently integrated or combined. In this work, we propose a pipeline approach in which we first train a set of individual, task-specific models and subsequently investigate their fusion, in contrast to standard model-merging strategies. We use deep-learning-based fusion of the task-specific models' predictions and high-level features to build stronger predictors at the patient level. To this end, we propose a multi-branch deep learning model that efficiently fuses features across different tasks and mammograms to obtain a comprehensive patient-level prediction. We train and evaluate our full pipeline on public mammography data, namely DDSM and its curated version CBIS-DDSM, and report AUC scores of 0.962 for predicting the presence of any lesion and 0.791 for predicting the presence of malignant lesions at the patient level. Overall, our fusion approaches improve AUC scores significantly, by up to 0.04 compared to standard model merging. Moreover, by providing task-specific model results that relate to radiological features, our pipeline aims to closely support radiologists' reading workflow.
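As a toy illustration of the patient-level fusion step, the sketch below concatenates task-specific predictions and high-level features from all views before a final score. A single logistic head (with hypothetical shapes and helper names) stands in for the paper's multi-branch network; this is a minimal sketch, not the actual implementation.

```python
import numpy as np

def patient_level_score(task_preds, task_feats, W, b):
    """Late fusion: concatenate per-task predictions and high-level
    features from all mammogram views, then apply a patient-level head.
    A logistic layer stands in for the paper's multi-branch network."""
    x = np.concatenate([np.ravel(task_preds), np.ravel(task_feats)])
    return float(1.0 / (1.0 + np.exp(-(W @ x + b))))

# Hypothetical setup: two task-specific models (any-lesion, malignancy),
# four views per patient, and an 8-dim feature vector per view.
rng = np.random.default_rng(1)
preds = rng.uniform(size=(2, 4))       # per-view scores from each task model
feats = rng.normal(size=(2, 4, 8))     # per-view high-level features
W = rng.normal(size=preds.size + feats.size)
score = patient_level_score(preds, feats, W, 0.0)
```

In the paper the fusion head is trained end to end; here the weights are random, so only the data flow is illustrated.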
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.
Although Reinforcement Learning (RL) has shown impressive results in games and simulation, real-world application of RL suffers from its instability under changing environment conditions and hyperparameters. We give a first impression of the extent of this instability by showing that the hyperparameters found by automatic hyperparameter optimization (HPO) methods are not only dependent on the problem at hand, but even on how well the state describes the environment dynamics. Specifically, we show that agents in contextual RL require different hyperparameters if they are shown how environmental factors change. In addition, finding adequate hyperparameter configurations is not equally easy for both settings, further highlighting the need for research into how hyperparameters influence learning and generalization in RL.
Human speech can be characterized by different components, including semantic content, speaker identity and prosodic information. Significant progress has been made in disentangling representations for semantic content and speaker identity in Automatic Speech Recognition (ASR) and speaker verification tasks respectively. However, it is still an open challenging research question to extract prosodic information because of the intrinsic association of different attributes, such as timbre and rhythm, and because of the need for unsupervised training schemes to achieve robust large-scale and speaker-independent ASR. The aim of this paper is to address the disentanglement of emotional prosody from speech based on unsupervised reconstruction. Specifically, we identify, design, implement and integrate three crucial components in our proposed speech reconstruction model Prosody2Vec: (1) a unit encoder that transforms speech signals into discrete units for semantic content, (2) a pretrained speaker verification model to generate speaker identity embeddings, and (3) a trainable prosody encoder to learn prosody representations. We first pretrain the Prosody2Vec representations on unlabelled emotional speech corpora, then fine-tune the model on specific datasets to perform Speech Emotion Recognition (SER) and Emotional Voice Conversion (EVC) tasks. Both objective and subjective evaluations on the EVC task suggest that Prosody2Vec effectively captures general prosodic features that can be smoothly transferred to other emotional speech. In addition, our SER experiments on the IEMOCAP dataset reveal that the prosody features learned by Prosody2Vec are complementary and beneficial for the performance of widely used speech pretraining models and surpass the state-of-the-art methods when combining Prosody2Vec with HuBERT representations. Some audio samples can be found on our demo website.
The findable, accessible, interoperable, and reusable (FAIR) data principles have provided a framework for examining, evaluating, and improving how we share data with the aim of facilitating scientific discovery. Efforts have been made to generalize these principles to research software and other digital products. Artificial intelligence (AI) models -- algorithms that have been trained on data rather than explicitly programmed -- are an important target for this because of the ever-increasing pace with which AI is transforming scientific and engineering domains. In this paper, we propose a practical definition of FAIR principles for AI models and create a FAIR AI project template that promotes adherence to these principles. We demonstrate how to implement these principles using a concrete example from experimental high energy physics: a graph neural network for identifying Higgs bosons decaying to bottom quarks. We study the robustness of these FAIR AI models and their portability across hardware architectures and software frameworks, and report new insights on the interpretability of AI predictions by studying the interplay between FAIR datasets and AI models. Enabled by publishing FAIR AI models, these studies pave the way toward reliable and automated AI-driven scientific discovery.
Recent developments in explainable AI (XAI) methods allow researchers to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input-output relationships and how data connect with machine learning models. In this paper we explore the interpretability of DNN models designed to identify jets coming from top quark decay in high-energy proton-proton collisions at the Large Hadron Collider (LHC). We review a subset of existing top tagger models and explore different quantitative methods to identify which features play the most important roles in identifying the top jets. We also investigate how and why feature importance varies across different XAI metrics, how feature correlations impact their explainability, and how latent space representations encode information as well as correlate with physically meaningful quantities. Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretations of these models. We additionally illustrate the activity of hidden layers as Neural Activation Pattern (NAP) diagrams and demonstrate how they can be used to understand how DNNs relay information across the layers, and how this understanding can help make such models significantly simpler by allowing effective model reoptimization and hyperparameter tuning. By incorporating observations from these interpretability studies, we obtain state-of-the-art top tagging performance from an augmented implementation of the existing networks.
Variational quantum algorithms (VQAs) offer the most promising path to obtaining quantum advantages via noisy intermediate-scale quantum (NISQ) processors. Such systems leverage classical optimization to tune the parameters of a parameterized quantum circuit (PQC). The goal is to minimize a cost function that depends on the measurement outputs obtained from the PQC. Optimization is typically implemented via stochastic gradient descent (SGD). On NISQ computers, gate noise due to imperfections and decoherence affects the stochastic gradient estimates by introducing a bias. Quantum error mitigation (QEM) techniques can reduce the estimation bias without requiring an increase in the number of qubits, but they in turn cause an increase in the variance of the gradient estimates. This work studies the impact of quantum gate noise on the convergence of SGD for the variational eigensolver (VQE), a fundamental instance of VQAs. The main goal is to ascertain conditions under which QEM can enhance the performance of SGD for VQEs. It is shown that quantum gate noise induces a non-zero error floor on the convergence error of SGD (evaluated with respect to a reference noiseless PQC), which depends on the number of noisy gates, the strength of the noise, and the characteristics of the observable being measured and minimized. In contrast, with QEM, any arbitrarily small error can be obtained. Furthermore, for error levels attainable both with and without QEM, QEM can reduce the number of required iterations, but only as long as the quantum noise level is sufficiently small and a sufficiently large number of measurements is allowed at each SGD iteration. Numerical examples for a max-cut problem corroborate the main theoretical findings.
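The bias-variance trade-off described above can be illustrated with a purely classical toy simulation: a biased, low-variance gradient oracle (standing in for unmitigated gate noise) versus an unbiased, high-variance one (standing in for QEM). The cost function, noise model, and constants below are illustrative assumptions, not the paper's VQE setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_floor(grad_fn, theta0=2.0, lr=0.1, steps=2000, tail=500):
    """Run SGD on f(theta) = theta^2 / 2 with a stochastic gradient
    oracle and return the average iterate over the last `tail` steps."""
    theta, hist = theta0, []
    for _ in range(steps):
        theta -= lr * grad_fn(theta)
        hist.append(theta)
    return float(np.mean(hist[-tail:]))

# Unmitigated gate noise: a biased gradient (signal shrunk, offset added),
# with modest variance.
noisy_grad = lambda th: 0.7 * th + 0.2 + rng.normal(0.0, 0.1)
# With QEM: the bias is removed, but the variance of the estimate grows.
qem_grad = lambda th: th + rng.normal(0.0, 0.5)

floor_noisy = sgd_floor(noisy_grad)  # settles near a bias-induced error floor
floor_qem = sgd_floor(qem_grad)      # fluctuates around the true minimum, 0
```

With the biased oracle the iterates settle near the fixed point -0.2/0.7, a non-zero floor no amount of iteration removes; the unbiased oracle hovers around the true minimizer, at the cost of noisier steps.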
Machine learning (ML) has contributed significantly to the development of bioprocess engineering, but its application is still limited, hampering the enormous potential of bioprocess automation. ML for model-building automation can be seen as a way of introducing another level of abstraction, allowing human experts to focus on the most cognitive tasks of bioprocess development. First, probabilistic programming is used for the autonomous construction of predictive models. Second, machine learning automatically assesses alternative decisions by planning experiments to test hypotheses and by conducting investigations to gather informative data, focusing on model selection based on the uncertainty of model predictions. This review provides a comprehensive overview of ML-based automation in bioprocess development. On the one hand, the biotechnology and bioengineering community should be aware of the limitations of existing ML solutions for their application in biotechnology and biopharma. On the other hand, the missing links must be identified to enable easy implementation of ML and artificial intelligence (AI) solutions into valuable tools for the bio-community. We summarize ML implementations for several important bioprocess systems and raise two crucial challenges that remain bottlenecks for bioprocess automation and for reducing uncertainty in biotechnology development. There is no one-fits-all procedure; however, this review should help identify potential automation combining the biotechnology and ML domains.
Couples generally manage chronic diseases together, and the management takes an emotional toll on both the patient and their romantic partner. Consequently, recognizing each partner's emotions in daily life could provide insight into their emotional well-being in chronic disease management. Currently, the process of assessing each partner's emotions is manual, time-intensive, and costly. Although works on emotion recognition among couples exist, none of them have used data collected from couples' interactions in daily life. In this work, we collected 85 hours (1,021 five-minute samples) of real-world multimodal smartwatch sensor data (speech, heart rate, accelerometer, and gyroscope) and self-reported emotion data (n = 612) from partners (13 couples) managing type 2 diabetes in daily life. We extracted physiological, movement, acoustic, and linguistic features, and trained machine learning models (support vector machine and random forest) to recognize each partner's self-reported emotions (valence and arousal). The results of our best models were better than chance: 63.8% for arousal and 78.1% for valence. This work contributes toward building automated emotion recognition systems that would eventually enable partners to monitor their emotions in daily life and allow the delivery of interventions to improve their emotional well-being.
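A minimal sketch of the per-window feature-extraction step is shown below. The feature names and sampling rates are illustrative assumptions, not the paper's exact feature set; the resulting dictionary is what would then feed an SVM or random forest.

```python
import numpy as np

def window_features(heart_rate, acc, speech_energy):
    """Summarize one 5-minute window into simple physiological,
    movement, and acoustic features. The features here are
    illustrative stand-ins, not the study's exact feature set."""
    acc_mag = np.linalg.norm(acc, axis=1)  # movement intensity per sample
    return {
        "hr_mean": float(heart_rate.mean()),
        "hr_std": float(heart_rate.std()),
        "acc_mag_mean": float(acc_mag.mean()),
        "acc_mag_std": float(acc_mag.std()),
        "speech_energy_mean": float(speech_energy.mean()),
    }

# Synthetic 5-minute window: 1 Hz heart rate, 32 Hz tri-axial accelerometer,
# 1 Hz speech-energy envelope (all rates assumed for illustration).
rng = np.random.default_rng(2)
feats = window_features(
    heart_rate=rng.normal(75, 5, size=300),
    acc=rng.normal(0, 1, size=(300 * 32, 3)),
    speech_energy=rng.uniform(size=300),
)
```

Each 5-minute sample thus becomes one fixed-length feature vector, which is the form the classifiers in the abstract expect.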
Deep learning models are being applied with great success to an increasing number of use cases, but how do they perform in the real world? To test a model, a specific, cleaned dataset is assembled. However, when deployed in the real world, the model will face unexpected, out-of-distribution (OOD) data. In this work, we show that the so-called "radiologist-level" CheXNet model fails to recognize all OOD images and classifies them as having lung disease. To address this issue, we propose in-distribution voting, a novel method for classifying out-of-distribution images in multi-label classification. Using independent class-wise in-distribution (ID) predictors trained on ID and OOD data, we achieve, on average, 99% ID classification specificity and 98% sensitivity, significantly improving end-to-end performance compared with previous works on the ChestX-ray14 dataset. Our method surpasses other output-based OOD detectors even when trained solely with ImageNet as OOD data and tested with X-ray OOD images.
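The decision rule of in-distribution voting can be sketched as follows. The any-vote rule and per-class thresholds below are our reading of the abstract, not the paper's exact formulation, and the predictor scores are placeholders for the outputs of the trained class-wise ID-vs-OOD models.

```python
import numpy as np

def in_distribution_vote(id_probs, thresholds):
    """In-distribution voting (sketch): each class has its own binary
    ID-vs-OOD predictor. The image is kept as in-distribution if any
    class-wise predictor votes for it; otherwise it is flagged as OOD.
    The any-vote rule here is an assumption, not the paper's exact rule."""
    votes = np.asarray(id_probs) >= np.asarray(thresholds)
    return bool(votes.any())

# Three hypothetical class-wise ID predictors with per-class thresholds.
thresholds = [0.5, 0.6, 0.7]
chest_xray_is_id = in_distribution_vote([0.1, 0.8, 0.2], thresholds)   # kept
natural_img_is_id = in_distribution_vote([0.1, 0.2, 0.3], thresholds)  # OOD
```

Only images that pass the vote are then scored by the multi-label disease classifier, which is how the method shields CheXNet-style models from OOD inputs.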